Despite the advent of autonomous cars, it is likely, at least in the near future, that human attention will retain a central role as a guarantee of legal responsibility during the driving task. In this paper we study the dynamics of the driver's gaze and use it as a proxy to understand related attentional mechanisms. First, we build our analysis upon two questions: where and at what is the driver looking? Second, we model the driver's gaze by training a coarse-to-fine convolutional network on short sequences extracted from the DR(eye)VE dataset. Experimental comparison against different baselines reveals that the driver's gaze can indeed be learnt to some extent, despite i) being highly subjective and ii) only one driver's gaze being available for each sequence due to the irreproducibility of the scene. Finally, we advocate a new assisted driving paradigm which suggests to the driver, without intervening, where she should focus her attention.